

Higher-arity PAC learning, VC dimension and packing lemma

Chernikov, Artem, Towsner, Henry

arXiv.org Machine Learning

The aim of this note is to give an overview of some of our work in Chernikov, Towsner'20 (arXiv:2010.00726) developing higher-arity VC theory (VC$_n$ dimension), including a generalization of Haussler's packing lemma and an associated tame (slice-wise) hypergraph regularity lemma, and to demonstrate that it characterizes higher-arity PAC learning (PAC$_n$ learning) in $n$-fold product spaces with respect to product measures introduced by Kobayashi, Kuriyama and Takeuchi'15. We also point out how some of the recent results in arXiv:2402.14294, arXiv:2505.15688 and arXiv:2509.20404 follow from our work in arXiv:2010.00726.
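
For orientation, the classical (arity-1) notions being generalized can be stated as follows; the $n$-fold definitions over product spaces with product measures are given in arXiv:2010.00726 and are not reproduced here. A class $\mathcal{H} \subseteq \{0,1\}^X$ shatters a finite set $A \subseteq X$ if $\{h|_A : h \in \mathcal{H}\} = \{0,1\}^A$, and

\[ \mathrm{VC}(\mathcal{H}) = \sup\{\, |A| : A \subseteq X \text{ finite and shattered by } \mathcal{H} \,\}. \]

$\mathcal{H}$ is PAC learnable (in the realizable setting) if there is a learner and a sample bound $m(\varepsilon, \delta)$ such that, for every distribution $\mu$ on $X$ and every target $h^\ast \in \mathcal{H}$, an i.i.d. labelled sample of size $m(\varepsilon, \delta)$ yields, with probability at least $1 - \delta$, a hypothesis $\hat{h}$ with $\mu(\hat{h} \,\triangle\, h^\ast) \le \varepsilon$; by the fundamental theorem of statistical learning, this holds if and only if $\mathrm{VC}(\mathcal{H}) < \infty$.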


Learning Probabilistic Temporal Logic Specifications for Stochastic Systems

Roy, Rajarshi, Pote, Yash, Parker, David, Kwiatkowska, Marta

arXiv.org Artificial Intelligence

There has been substantial progress in the inference of formal behavioural specifications from sample trajectories, for example using Linear Temporal Logic (LTL). However, these techniques cannot handle specifications that correctly characterise systems with stochastic behaviour, which occur commonly in reinforcement learning and formal verification. We consider the passive learning problem of inferring a Boolean combination of probabilistic LTL (PLTL) formulas from a set of Markov chains, classified as either positive or negative. We propose a novel learning algorithm that infers concise PLTL specifications, leveraging grammar-based enumeration, search heuristics, probabilistic model checking and Boolean set-cover procedures. We demonstrate the effectiveness of our algorithm in two use cases: learning from policies induced by RL algorithms and learning from variants of a probabilistic model. In both cases, our method automatically and efficiently extracts PLTL specifications that succinctly characterize the temporal differences between the policies or model variants.
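
As a minimal sketch of the Boolean set-cover step mentioned above (illustrative only, not the paper's implementation; all names and formulas below are hypothetical), assume each candidate PLTL formula has already been model checked against every chain, so we know which positive chains it accepts while rejecting all negative chains; a greedy loop then assembles a small disjunction covering all positives:

def greedy_cover(positives, candidates):
    # candidates: dict mapping a formula (string) to the set of positive chains
    # it accepts; each candidate is assumed to reject every negative chain.
    uncovered = set(positives)
    chosen = []
    while uncovered:
        # pick the candidate covering the most still-uncovered positive chains
        best = max(candidates, key=lambda f: len(candidates[f] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            raise ValueError("remaining positives cannot be covered")
        chosen.append(best)
        uncovered -= gained
    return chosen  # read as the disjunction of the chosen formulas

# Hypothetical usage: three positive chains, two candidate formulas.
print(greedy_cover({"M1", "M2", "M3"},
                   {"P>=0.9 [F goal]": {"M1", "M2"},
                    "P<=0.1 [F error]": {"M3"}}))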


Counting Like Transformers: Compiling Temporal Counting Logic Into Softmax Transformers

Yang, Andy, Chiang, David

arXiv.org Artificial Intelligence

Deriving formal bounds on the expressivity of transformers, as well as studying transformers that are constructed to implement known algorithms, are both effective methods for better understanding the computational power of transformers. Towards both ends, we introduce the temporal counting logic $\textbf{K}_\text{t}$[#] alongside the RASP variant $\textbf{C-RASP}$. We show they are equivalent to each other, and that together they are the best-known lower bound on the formal expressivity of future-masked soft attention transformers with unbounded input size. We prove this by showing all $\textbf{K}_\text{t}$[#] formulas can be compiled into these transformers. As a case study, we demonstrate on paper how to use $\textbf{C-RASP}$ to construct simple transformer language models that, using greedy decoding, can only generate sentences that have given properties formally specified in $\textbf{K}_\text{t}$[#].
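
The counting primitive at the core of $\textbf{K}_\text{t}$[#] and $\textbf{C-RASP}$ can be illustrated with a short sketch (ours, not the paper's construction): at every position of the input, count how many positions up to and including it satisfy a predicate, then compare such counts; future masking corresponds to each count only looking at the prefix.

import numpy as np

def prefix_count(bits):
    # bits: 0/1 array; entry i of the result is the number of j <= i with
    # bits[j] == 1, i.e. a future-masked (causal) count over the prefix.
    return np.cumsum(bits)

# Example property in the spirit of K_t[#]: at every prefix, the number of a's
# is at least the number of b's (a Dyck-like constraint).
w = list("aababb")
count_a = prefix_count(np.array([c == "a" for c in w], dtype=int))
count_b = prefix_count(np.array([c == "b" for c in w], dtype=int))
print(bool(np.all(count_a >= count_b)))  # True for this word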


Automated reasoning support for Standpoint-OWL 2

Emmrich, Florian, Álvarez, Lucía Gómez, Strass, Hannes

arXiv.org Artificial Intelligence

We present a tool for modelling and reasoning with knowledge from various diverse (and possibly conflicting) viewpoints. The theoretical underpinnings are provided by enhancing base logics with standpoints, following a recently introduced formalism that we also recall. The tool works by translating the standpoint-enhanced version of the description logic SROIQ to its plain (i.e. classical) version. Existing reasoners can then be used directly to provide automated support for reasoning about diverse standpoints.
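
As a schematic illustration of the kind of statement involved (the example and notational details here are indicative only, not taken from the paper), a standpoint-enhanced knowledge base may contain axioms such as

\[ \Box_{\mathrm{eco}}(\mathrm{Forest} \sqsubseteq \exists \mathrm{hasCover}.\mathrm{Trees}) \qquad \text{and} \qquad \Diamond_{\mathrm{leg}}(\mathrm{Forest} \sqsubseteq \lnot \exists \mathrm{hasCover}.\mathrm{Trees}), \]

saying that under every precisification of the ecological standpoint a forest has tree cover, while under some precisification of the legal standpoint it need not; after translation to plain SROIQ, an off-the-shelf reasoner can check that such viewpoints are jointly consistent.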


Many-valued Argumentation, Conditionals and a Probabilistic Semantics for Gradual Argumentation

Alviano, Mario, Giordano, Laura, Dupré, Daniele Theseider

arXiv.org Artificial Intelligence

In this paper we propose a general approach to define a many-valued preferential interpretation of gradual argumentation semantics. The approach allows for conditional reasoning over arguments and Boolean combinations of arguments, with respect to a class of gradual semantics, through the verification of graded (strict or defeasible) implications over a preferential interpretation. As a proof of concept, in the finitely-valued case, an Answer Set Programming approach is proposed for conditional reasoning in a many-valued argumentation semantics of weighted argumentation graphs. The paper also develops and discusses a probabilistic semantics for gradual argumentation, which builds on the many-valued conditional semantics.
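
To fix intuitions with a schematic example (thresholds and surface syntax are illustrative only, not the paper's): a gradual semantics assigns each argument $A$ an acceptability degree $\sigma(A) \in [0,1]$, and a graded strict implication such as

\[ (A \ge 0.7) \rightarrow (B \ge 0.5) \]

holds when every valuation admitted by the semantics that gives $A$ degree at least $0.7$ also gives $B$ degree at least $0.5$, whereas its defeasible counterpart only constrains the preferred (most typical) such valuations.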


Reasoning About Causal Models With Infinitely Many Variables

Halpern, Joseph Y., Peters, Spencer

arXiv.org Artificial Intelligence

Generalized structural equations models (GSEMs) [Peters and Halpern 2021] are, as the name suggests, a generalization of structural equations models (SEMs). They can deal with (among other things) infinitely many variables with infinite ranges, which is critical for capturing dynamical systems. We provide a sound and complete axiomatization of causal reasoning in GSEMs that is an extension of the sound and complete axiomatization provided by Halpern [2000] for SEMs. Considering GSEMs helps clarify what properties Halpern's axioms capture.
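
For readers unfamiliar with the base language: causal formulas are built from intervention statements $[\vec{Y} \leftarrow \vec{y}]\varphi$, read "after setting the variables $\vec{Y}$ to the values $\vec{y}$, $\varphi$ holds". Two representative axioms from Halpern's [2000] axiomatization for SEMs, recalled informally here for orientation (the GSEM extension is developed in the paper), are effectiveness and composition:

\[ [\vec{Y} \leftarrow \vec{y}](Y_i = y_i), \qquad \big([\vec{Y} \leftarrow \vec{y}](W = w) \wedge [\vec{Y} \leftarrow \vec{y}](X = x)\big) \rightarrow [\vec{Y} \leftarrow \vec{y}, W \leftarrow w](X = x). \]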


Scalable Anytime Algorithms for Learning Formulas in Linear Temporal Logic

Raha, Ritam, Roy, Rajarshi, Fijalkow, Nathanaël, Neider, Daniel

arXiv.org Artificial Intelligence

Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas. We consider the problem of learning LTL formulas for classifying traces; despite a growing interest of the research community, existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result. We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime, meaning that in most cases it successfully outputs a formula, albeit possibly not of minimal size. We evaluate the performance of our algorithm using an open-source implementation on publicly available benchmarks.
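
A minimal sketch (illustrative only, not the paper's algorithm) of the basic primitive any enumeration-based LTL learner relies on: evaluating a candidate formula on a finite trace, so that candidates can be scored by how well they separate positive from negative traces.

def holds(formula, trace, i=0):
    # formula: nested tuples such as ('F', ('AP', 'p'));
    # trace: list of sets of atomic propositions.
    op = formula[0]
    if op == 'AP':
        return formula[1] in trace[i]
    if op == 'NOT':
        return not holds(formula[1], trace, i)
    if op == 'AND':
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == 'X':   # next; on finite traces, false at the last position
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == 'F':   # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':   # globally
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

# Hypothetical usage: F(G p) holds on a trace where p eventually stays true.
trace = [{'q'}, {'p'}, {'p'}, {'p'}]
print(holds(('F', ('G', ('AP', 'p'))), trace))  # True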


Learning Concepts Described by Weight Aggregation Logic

van Bergerem, Steffen, Schweikardt, Nicole

arXiv.org Artificial Intelligence

We consider weighted structures, which extend ordinary relational structures by assigning weights, i.e. elements from a particular group or ring, to tuples present in the structure. We introduce an extension of first-order logic that allows one to aggregate weights of tuples, compare such aggregates, and use them to build more complex formulas. We provide locality properties of fragments of this logic, including Feferman-Vaught decompositions and a Gaifman normal form for a fragment called FOW1, as well as a localisation theorem for a larger fragment called FOWA1. This fragment can express concepts from various machine learning scenarios. Using the locality properties, we show that concepts definable in FOWA1 over a weighted background structure of at most polylogarithmic degree are agnostically PAC-learnable in polylogarithmic time after pseudo-linear time preprocessing.
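
As a schematic illustration (notation simplified relative to the paper's official syntax): over a weighted graph whose edges $E(x,y)$ carry weights $w(x,y)$ from an ordered ring, a formula of this kind can define the vertices whose outgoing weight dominates their incoming weight,

\[ \varphi(x) \;:=\; \Big(\sum_{y \,:\, E(x,y)} w(x,y)\Big) \;\ge\; \Big(\sum_{y \,:\, E(y,x)} w(y,x)\Big), \]

i.e. weight aggregates are formed, compared, and then used as building blocks of larger first-order formulas.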


Foundations of Reasoning with Uncertainty via Real-valued Logics

Fagin, Ronald, Riegel, Ryan, Gray, Alexander

arXiv.org Artificial Intelligence

Real-valued logics underlie an increasing number of neuro-symbolic approaches, though typically their logical inference capabilities are characterized only qualitatively. We provide foundations for establishing the correctness and power of such systems. For the first time, we give a sound and complete axiomatization for a broad class containing all the common real-valued logics. This axiomatization allows us to derive exactly what information can be inferred about the combinations of real values of a collection of formulas given information about the combinations of real values of several other collections of formulas. We then extend the axiomatization to deal with weighted subformulas. Finally, we give a decision procedure based on linear programming for deciding, under certain natural assumptions, whether a set of our sentences logically implies another of our sentences.
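
For concreteness, standard examples of real-valued logics in this vein are Łukasiewicz, Gödel and product logic, whose basic connectives act on truth values $x, y \in [0,1]$ as

\[ \text{Łukasiewicz: } x \otimes y = \max(0,\, x + y - 1),\quad x \rightarrow y = \min(1,\, 1 - x + y); \qquad \text{Gödel: } x \wedge y = \min(x, y); \qquad \text{product: } x \odot y = x \cdot y. \]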


Learning Concepts Definable in First-Order Logic with Counting

van Bergerem, Steffen

arXiv.org Artificial Intelligence

We study classification problems over relational background structures for hypotheses that are defined using logics with counting. The aim of this paper is to find learning algorithms running in time sublinear in the size of the background structure. We show that hypotheses defined by FOCN(P)-formulas over structures of polylogarithmic degree can be learned in sublinear time. Furthermore, we prove that for structures of unbounded degree there is no sublinear learning algorithm for first-order formulas.
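
As a schematic example of such a hypothesis (notation indicative only): over a graph, a first-order counting formula like

\[ \varphi(x) \;:=\; \#\, y\, \big(E(x,y)\big) \,\ge\, 3 \]

defines the concept "vertices with at least three neighbours"; FOCN(P) further allows counting terms to be combined arithmetically and tested with numerical predicates from P.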